Search Results
Search for: All records
Total Resources: 3
-
We present a novel use of Transformers to make image classification interpretable. Unlike mainstream classifiers that wait until the last fully connected layer to incorporate class information into their predictions, we investigate a proactive approach in which each class searches for itself in an image. We realize this idea via a Transformer encoder-decoder inspired by the DEtection TRansformer (DETR). We learn “class-specific” queries (one for each class) as input to the decoder, enabling each class to localize its patterns in an image via cross-attention. We name our approach the INterpretable TRansformer (INTR), which is straightforward to implement and exhibits several compelling properties. We show that INTR intrinsically encourages each class to attend distinctively; the cross-attention weights thus provide a faithful interpretation of the prediction. Interestingly, via “multi-head” cross-attention, INTR can identify different “attributes” of a class, making it particularly suitable for fine-grained classification and analysis, which we demonstrate on eight datasets. Our code and pre-trained models are publicly available at the Imageomics Institute GitHub site: https://github.com/Imageomics/INTR.
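The abstract's core mechanism, class-specific queries that each locate their own class via cross-attention, can be sketched in plain NumPy. This is a toy illustration under assumed shapes, with randomly initialized stand-ins for learned parameters, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
num_classes, num_patches, d = 5, 49, 16

# Stand-ins for learned parameters (random here): one query per class,
# the key INTR idea, plus a shared scoring vector that turns each class's
# attended feature into that class's logit.
class_queries = rng.normal(size=(num_classes, d))
w_score = rng.normal(size=(d,))

# Stand-in for the encoder's output on one image: one feature per patch.
patch_feats = rng.normal(size=(num_patches, d))

# Cross-attention: each class query attends over the image patches.
attn = softmax(class_queries @ patch_feats.T / np.sqrt(d), axis=-1)  # (C, P)
attended = attn @ patch_feats                                        # (C, d)
logits = attended @ w_score                                          # (C,)
pred = int(np.argmax(logits))
```

Each row of `attn` is one class's attention map over the image patches; in INTR, these per-class maps are what provide the interpretation of the prediction.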
-
Tu, Cheng-Hao; Chen, Hong-You; Carlyn, David; Chao, Wei-Lun (Proceedings of the AAAI Conference on Artificial Intelligence)
Fractals are geometric shapes that can display the complex, self-similar patterns found in nature (e.g., clouds and plants). Recent work in visual recognition has leveraged this property to create random fractal images for model pre-training. In this paper, we study the inverse problem: given a target image (not necessarily a fractal), we aim to generate a fractal image that looks like it. We propose a novel approach that learns the parameters underlying a fractal image via gradient descent. We show that our approach can find fractal parameters of high visual quality and is compatible with different loss functions, opening up several possibilities, e.g., learning fractals for downstream tasks, scientific understanding, etc.
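The "learn fractal parameters by gradient descent" idea can be illustrated on a much-simplified toy inverse problem (an assumed setup for illustration, not the paper's method): recover the contraction ratio of a chaos-game iterated function system from a target point cloud by descending a moment-matching loss with finite-difference gradients.

```python
import numpy as np

def ifs_points(r, n=1000, seed=0):
    """Chaos game: repeatedly contract toward a random triangle corner by ratio r."""
    rng = np.random.default_rng(seed)
    corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
    cs = corners[rng.integers(3, size=n)]   # fixed corner sequence (same seed)
    p, pts = np.zeros(2), np.empty((n, 2))
    for i in range(n):
        p = p + r * (cs[i] - p)             # contract toward the chosen corner
        pts[i] = p
    return pts

def loss(r, target):
    """Match first and second moments of the generated and target point clouds."""
    pts = ifs_points(r)
    return float(np.mean((pts.mean(0) - target.mean(0)) ** 2) +
                 np.mean((pts.std(0) - target.std(0)) ** 2))

target = ifs_points(0.5)                    # "target image" made with true r = 0.5
r = 0.8                                     # start far from the truth
for _ in range(150):
    eps = 1e-3                              # finite-difference gradient of the loss
    g = (loss(r + eps, target) - loss(r - eps, target)) / (2 * eps)
    r -= 0.5 * g                            # gradient-descent step
```

The paper optimizes the full parameter set of a fractal image against a target image; here a single scalar parameter and a point-cloud loss stand in to keep the sketch self-contained.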
-
Elhamod, Mohannad; Khurana, Mridul; Manogaran, Harish Babu; Uyeda, Josef C.; Balk, Meghan A.; Dahdul, Wasila; Bakis, Yasin; Bart, Henry L.; Mabee, Paula M.; Lapp, Hilmar; et al. (KDD '23: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining)